7 - Deep Learning [ID:11603]

And again, good morning, everyone. Today I once more have the pleasure of stepping in for Professor Maier. To briefly recap what you dealt with last week: you talked about common practices. What can you actually do to train a neural network? What are practical recommendations in that regard? What kind of tricks can you apply to find good parameters?

We told you that you should definitely separate your training and your test set, so that you don't get overly optimistic results and so that your results also translate to a real-world setting.

In addition to that, you talked about common architectures: the groundbreaking architectures of roughly the last 30 years that really moved the field of deep learning a big step forward. You talked about residual connections, for example, about the famous LeNet and AlexNet, and in general about architectures that had a big impact on the deep learning community and the machine learning community as a whole.

Today, we are going to look at a different kind of architecture, namely recurrent neural networks. We will start with a short motivation: what are they, why do we want to have them, and what is the benefit here?

We will then go into simple recurrent networks, continue with the somewhat more complex principles applied in long short-term memory units and gated recurrent units, and compare the three of them: what are their respective benefits and drawbacks? Then we will move on to sampling strategies and sum everything up with a few examples.

So what are these recurrent neural networks, and why do we want to have them? So far, we usually had a single input to the network; for example, one image was fed into our network. This is also why we call these networks feed-forward networks: there is only one direction, from the input to the output, with no connection backwards within the network. So feed-forward is the keyword that you should remember in this context.
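
To make the contrast with what follows concrete, here is a minimal, hypothetical sketch of such a feed-forward pass in NumPy (the layer sizes and variable names are illustrative assumptions, not taken from the lecture): each input is processed on its own, and information flows only from input to output.

```python
import numpy as np

# Minimal feed-forward pass: information flows in one direction only,
# from the input x through a hidden layer to the output y.
def feed_forward(x, W1, b1, W2, b2):
    h = np.tanh(W1 @ x + b1)   # hidden layer activation
    return W2 @ h + b2         # output layer (no connection back to earlier layers)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                         # a single input snapshot, e.g. one image's features
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)  # illustrative layer sizes
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)
print(feed_forward(x, W1, b1, W2, b2))             # one output per independent input
```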

If we look at real-world data, this ideal setting in which we just have one snapshot of a certain point in time does not actually hold. We have a lot of sequential data, or data that has a time dimension. In speech, for example, it doesn't make sense to just have a single snapshot of a talk; instead, you want to have the whole talk in one way or another.

The same holds for video sequences: we have a number of images with a time dimension, and we want to make use of this time dimension as well; the images only make sense in the temporal order in which we acquired them. And lastly, we also have a lot of sensor data, for example from temperature or speed sensors in a car, where you also want the flow of time included in your network architecture.

As I've said, individual snapshots of this data may not be very informative. If you look at translation tasks, for example, a single word can have many meanings, and the actual meaning a word has in a spoken sentence often depends a lot on the context in which it is spoken. So this temporal context around the word, around the sequence, around the snapshot is actually very important.
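
As a preview of the recurrence the rest of the lecture develops, here is a minimal sketch of a simple recurrent update in NumPy (the variable names and sizes are illustrative assumptions, not the lecture's notation): a hidden state is carried from one time step to the next, so each step can take the temporal context of everything seen before into account.

```python
import numpy as np

# Simple recurrent update: the hidden state h carries context along the sequence.
def rnn_step(x_t, h_prev, W_x, W_h, b):
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

rng = np.random.default_rng(0)
sequence = rng.standard_normal((5, 3))  # 5 time steps with 3 features each, e.g. word embeddings
W_x = rng.standard_normal((4, 3))       # input-to-hidden weights
W_h = rng.standard_normal((4, 4))       # hidden-to-hidden (recurrent) weights
b = np.zeros(4)

h = np.zeros(4)                         # initial hidden state
for x_t in sequence:
    h = rnn_step(x_t, h, W_x, W_h, b)   # h now summarizes all steps seen so far
print(h)
```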

Part of a video series:

Accessible via: Open access

Duration: 01:00:38 min

Recording date: 2019-06-13

Uploaded on: 2019-06-13 15:29:03

Language: en-US

Deep Learning (DL) has attracted much interest in a wide range of applications such as image recognition, speech recognition and artificial intelligence, from both academia and industry. This lecture introduces the core elements of neural networks and deep learning; it comprises:

  • (multilayer) perceptron, backpropagation, fully connected neural networks

  • loss functions and optimization strategies

  • convolutional neural networks (CNNs)

  • activation functions

  • regularization strategies

  • common practices for training and evaluating neural networks

  • visualization of networks and results

  • common architectures, such as LeNet, AlexNet, VGG, GoogLeNet

  • recurrent neural networks (RNN, TBPTT, LSTM, GRU)

  • deep reinforcement learning

  • unsupervised learning (autoencoder, RBM, DBM, VAE)

  • generative adversarial networks (GANs)

  • weakly supervised learning

  • applications of deep learning (segmentation, object detection, speech recognition, ...)

 

Due to technical problems, the first minutes of the lecture are missing. We apologize for the inconvenience.
